
    Commodity Trade Stabilization Through International Agreements

    We introduce a simple and efficient procedure for the segmentation of rigidly moving objects imaged under an affine camera model. For this purpose we revisit the theory of "linear combination of views" (LCV), proposed by Ullman and Basri [20], which states that the set of 2D views of an object undergoing 3D rigid transformations is embedded in a low-dimensional linear subspace spanned by a small number of basis views. We show that this theory can be used for motion segmentation, clustering the trajectories of 3D objects using only two 2D basis views. We therefore propose a practical motion segmentation method, built around LCV, that is simple to implement and use, and in addition is fast enough to be well suited for real-time SfM and tracking applications. Experiments on real image sequences show good segmentation results, comparable to the state of the art in the literature. When computational complexity is also taken into account, the proposed method is one of the best performers in combined speed and accuracy.
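The LCV property this abstract builds on can be checked numerically: under an affine (orthographic) camera, any novel 2D view of a rigid object lies in the span of two basis views plus a constant offset. A minimal NumPy sketch on synthetic data (not the authors' implementation; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 60))  # 3-D points of one rigid object

def affine_view(points, angle):
    # affine (orthographic) camera: rotate about the y-axis, keep x/y rows
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return (R @ points)[:2]  # 2 x P image coordinates

v1, v2, v3 = (affine_view(X, a) for a in (0.0, 0.4, 0.9))

# LCV: a novel view is a linear combination of two basis views
# (plus a constant offset), so v3 must lie in the span of B.
B = np.vstack([v1, v2, np.ones(X.shape[1])])       # 5 x P basis
coef, *_ = np.linalg.lstsq(B.T, v3.T, rcond=None)
residual = np.abs(B.T @ coef - v3.T).max()
print(f"max reconstruction error: {residual:.2e}")  # tiny for a rigid object
```

In the segmentation setting, trajectories belonging to different rigid objects fit different basis-view spans, so the size of this residual can serve as a clustering cue.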

    Teaching Policy Instrument Choice in Environmental Law: The Five P’s

    Abstract. We introduce a method for combining the color channels of an image into a scalar-valued image. Linear combinations of the RGB channels are constructed using the Fisher-Trace-Information (FTI), defined as the trace of the Fisher information matrix of the Weibull distribution, as a cost function. The FTI characterizes the local geometry of the Weibull manifold independently of the parametrization of the distribution. We show that minimizing the FTI leads to contrast-enhanced images suitable for segmentation processes. The Riemannian structure of the manifold of Weibull distributions is used to design optimization methods for finding optimal RGB weight vectors. Using a threshold procedure we find good solutions even for images with limited content variation. Experiments show how the method adapts to images with widely varying visual content. Using these image-dependent de-colorizations, one can obtain substantially improved segmentation results compared to a mapping with pre-defined coefficients.
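The overall search over channel weights can be illustrated with a rough sketch. Here a generic gradient-energy contrast score stands in for the paper's FTI cost (which requires fitting a Weibull distribution to local image statistics), and the simplex search replaces the paper's Riemannian optimization; everything below is a hypothetical stand-in, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))  # toy RGB image with values in [0, 1]

def contrast_score(gray):
    # stand-in cost: mean gradient magnitude; the paper instead
    # minimizes the Fisher-Trace-Information of a Weibull fit
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy).mean()

best_w, best_score = None, -np.inf
for w in rng.dirichlet(np.ones(3), size=200):  # weights on the simplex
    score = contrast_score(img @ w)            # scalar-valued image
    if score > best_score:
        best_w, best_score = w, score
print(best_w, best_score)
```

The point of the image-dependent weighting is that the best `w` differs per image, whereas a fixed mapping (e.g. the usual luminance coefficients) cannot adapt.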

    Enhancing motion segmentation by combination of complementary affinities

    Complementary information, when combined in the right way, can improve clustering and segmentation. In this paper, we show how motion segmentation accuracy can be enhanced by a very simple and inexpensive combination of complementary information, drawn from the column and row spaces of the same measurement matrix. We test our approach on the Hopkins155 dataset, where it outperforms all other state-of-the-art methods.
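One generic way to realize such a combination can be sketched in NumPy. The paper's exact affinities and fusion rule are not given here, so this sketch uses a projection-cosine cue for the column space, a shape-interaction-style cue for the row space, and an element-wise product to fuse them; all choices are assumptions:

```python
import numpy as np

def combined_affinity(W, rank):
    """Fuse two trajectory affinities computed from the same
    measurement matrix W (2F x P): one from its column space,
    one from its row space."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # column-space cue: cosines between trajectories projected onto U_r
    P = U[:, :rank].T @ W
    P = P / np.linalg.norm(P, axis=0, keepdims=True)
    A_col = np.abs(P.T @ P)
    # row-space cue: shape-interaction-style affinity from V_r
    V = Vt[:rank].T
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    A_row = np.abs(V @ V.T)
    return A_col * A_row  # element-wise product as a simple fusion rule

rng = np.random.default_rng(2)
# two rigid motions -> two independent rank-4 trajectory subspaces
W = np.hstack([rng.standard_normal((60, 4)) @ rng.standard_normal((4, 20))
               for _ in range(2)])
A = combined_affinity(W, rank=8)
intra = (A[:20, :20].mean() + A[20:, 20:].mean()) / 2
inter = A[:20, 20:].mean()
print(intra, inter)  # intra-cluster affinity dominates
```

The fused affinity matrix would then be fed to a spectral clustering step to produce the final motion segmentation.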

    Online Learning for Fast Segmentation of Moving Objects
    http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-86211

    Abstract. This work addresses the problem of fast, online segmentation of moving objects in video. We pose this as a discriminative online semi-supervised appearance learning task, where supervising labels are autonomously generated by a motion segmentation algorithm. The computational complexity of the approach is significantly reduced by performing learning and classification on oversegmented image regions (superpixels) rather than per pixel. In addition, we further exploit the sparse trajectories from the motion segmentation to obtain a simple model that encodes the spatial properties and location of objects at each frame. Fusing these complementary cues produces good object segmentations at very low computational cost. In contrast to previous work, the proposed approach (1) performs segmentation on-the-fly (allowing for applications where data arrives sequentially), (2) has no prior model of object types or ‘objectness’, and (3) operates at significantly reduced computational cost. The approach and its ability to learn, disambiguate and segment the moving objects in the scene are evaluated on a number of benchmark video sequences.
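The online appearance-learning loop described above can be sketched minimally, assuming a plain logistic model over per-superpixel feature vectors and synthetic data; the paper's actual features, labels, and learner differ, and every name below is hypothetical:

```python
import numpy as np

class OnlineAppearanceModel:
    """Tiny online logistic classifier over per-superpixel features;
    a stand-in for the paper's discriminative appearance model."""
    def __init__(self, dim, lr=0.5):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))

    def update(self, X, y):
        # y: {0,1} labels supplied by the motion segmenter for this frame
        g = self.predict_proba(X) - y
        self.w -= self.lr * X.T @ g / len(y)
        self.b -= self.lr * g.mean()

rng = np.random.default_rng(3)
model = OnlineAppearanceModel(dim=2)
for frame in range(30):  # frames (and their labels) arrive sequentially
    Xo = rng.normal(+1.0, 0.5, size=(25, 2))  # "object" superpixels
    Xb = rng.normal(-1.0, 0.5, size=(25, 2))  # "background" superpixels
    X = np.vstack([Xo, Xb])
    y = np.r_[np.ones(25), np.zeros(25)]
    model.update(X, y)

acc = ((model.predict_proba(X) > 0.5) == y).mean()
print(f"accuracy on last frame: {acc:.2f}")
```

Working per superpixel rather than per pixel is what keeps such an update cheap: each frame contributes a few hundred feature vectors instead of hundreds of thousands of pixels.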

    Affine Invariant, Model-Based Object Recognition Using Robust Metrics and Bayesian Statistics
    http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-52181

    N.B.: When citing this work, cite the original article. The original publication is available at www.springerlink.com